17 Apr 2026 14:00 CEST

From synthetic data to neural network deployment - Choosing the optimal device for each task

Ric Dengel

University of Tartu

Future space exploration, particularly fast fly-by missions to dynamically new comets, presents extreme challenges for onboard data processing. With limited observation windows, critical proximity risks, and constrained downlink budgets, spacecraft require high-reliability, low-latency machine learning (ML) systems on edge devices (such as FPGAs) to rapidly process, segment, and prioritize scientific images. However, developing these systems faces two major hurdles: the lack of training data for unprecedented targets and the immense complexity of selecting the ideal neural network architecture for a specific hardware platform. This presentation explores a comprehensive, end-to-end workflow designed to bridge the gap between initial mission concepts and physical hardware deployment. The talk will first address the data scarcity challenge, demonstrating how synthetic data generation—paired with robust domain transfer techniques—can yield models that generalize effectively to real-world application domains, such as actual Rosetta comet imagery. The presentation will then dive into the critical phases of model and deployment optimization. Using an automated deployment pipeline for quantization and compilation, we will systematically benchmark a variety of network architectures (including UNet, DeepLabv3, and YOLO11). By evaluating these quantized models across multiple hardware targets—ranging from the Kria K26 and Zynq UltraScale+ (ZCU102/ZCU111) to the Versal AI Edge VE2302—we will analyze the complex trade-offs between model accuracy (Dice score), throughput (FPS), latency, and power consumption.
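To make the evaluation criteria concrete, the following is a minimal sketch of how the segmentation-accuracy and efficiency metrics named in the abstract might be computed. The Dice score formula is standard; the benchmark records and the frames-per-second-per-watt figure of merit are illustrative assumptions, not results from the talk.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient for binary segmentation masks (1.0 = perfect overlap)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Hypothetical benchmark records, one per (model, device) pair; the numbers
# are placeholders, not measurements from the presented work.
results = [
    {"model": "UNet",   "device": "ZCU102", "fps": 45.0,  "latency_ms": 22.2, "power_w": 12.5},
    {"model": "YOLO11", "device": "VE2302", "fps": 120.0, "latency_ms": 8.3,  "power_w": 9.0},
]

# One simple way to fold throughput and power into a single efficiency
# figure of merit: frames processed per second per watt consumed.
for r in results:
    r["fps_per_w"] = r["fps"] / r["power_w"]
```

A real comparison would plot Dice score against fps_per_w and latency for each quantized model/device pair, making the accuracy-versus-efficiency trade-off directly visible.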

Advanced Concepts Team